Upcoming Event: Oden Institute & College of Natural Sciences
Diana Cai, Research Fellow, Center for Computational Mathematics at Flatiron Institute
3:30 – 5:00 PM
Thursday, January 29, 2026
Modern scientific discovery is often constrained by noisy and expensive data, and further challenges arise from the need to model complex latent processes. Scientists are increasingly turning to probabilistic models, which offer a principled framework for uncertainty quantification and a vehicle for encoding domain knowledge. Yet computational challenges frequently stand between modeling and scientific innovation. In particular, posterior inference over latent variables remains a central bottleneck.
Variational inference (VI) methods recast posterior inference as an optimization problem. Enabled by advances in automatic differentiation, VI is now broadly accessible to scientists through “black-box” VI (BBVI) methods based on stochastic gradient descent. But BBVI often converges slowly due to noisy gradients and sensitivity to hyperparameters such as the learning rate, especially for more expressive variational families.
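To make the BBVI setup concrete, here is a minimal toy sketch (not material from the talk): a one-dimensional Gaussian variational family is fit to a known Gaussian target by stochastic gradient ascent on the ELBO, with gradients computed via the reparameterization trick. The target, step size, and batch size are all illustrative choices.

```python
import numpy as np

# Toy BBVI sketch (illustrative only): fit q = N(mu, s^2) to a 1-D Gaussian
# target p = N(2, 0.5^2) by stochastic gradient ascent on the ELBO, using
# the reparameterization trick x = mu + s * eps with eps ~ N(0, 1).

rng = np.random.default_rng(1)

def grad_log_p(x):
    """Score of the target N(2, 0.5^2); in practice supplied by autodiff."""
    return -(x - 2.0) / 0.25

mu, log_s = 0.0, 0.0                     # variational parameters
lr, batch = 0.05, 32

for _ in range(2000):
    eps = rng.normal(size=batch)
    s = np.exp(log_s)
    x = mu + s * eps                     # reparameterized samples
    g = grad_log_p(x)
    # ELBO = E[log p(x)] + log s + const; reparameterization gradients:
    grad_mu = g.mean()
    grad_log_s = (g * s * eps).mean() + 1.0
    mu += lr * grad_mu
    log_s += lr * grad_log_s

print(mu, np.exp(log_s))                 # approaches (2.0, 0.5), with noise
```

Even in this easy problem the iterates jitter around the optimum because each gradient is a noisy Monte Carlo estimate, which is the slow-convergence issue the abstract refers to.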
In this talk, I present Batch-and-Match (BaM), a new approach to BBVI based on matching the scores of the variational and target distributions on a batch of samples. BaM avoids stochastic gradient descent, admits closed-form updates for full-covariance Gaussians, and converges with significantly fewer gradient evaluations than standard BBVI. I then discuss extensions to high dimensions and richer variational families. Using materials design as a motivating application, I show how variational inference, combined with physics knowledge, accelerates the prediction of stable materials. I conclude by outlining a broader vision for probabilistic machine learning and its role in modern scientific discovery.
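The score-matching idea behind the abstract can be illustrated with a toy sketch (this is a hypothetical illustration, not the actual BaM update): for a Gaussian q = N(mu, Sigma), the score grad_x log q(x) = -Sigma^{-1}(x - mu) is affine in x, so if the target's score can be evaluated on a batch of points, an ordinary least-squares fit of scores against locations yields a full-covariance Gaussian in closed form, and recovers the target exactly when the target is itself Gaussian.

```python
import numpy as np

# Toy score-matching sketch (hypothetical; not the BaM algorithm itself).
# For q(x) = N(mu, Sigma):  grad_x log q(x) = -Sigma^{-1}(x - mu) = A x + b,
# with A = -Sigma^{-1} and b = Sigma^{-1} mu. Fitting target scores on a
# batch by least squares gives (mu, Sigma) in closed form.

rng = np.random.default_rng(0)
d = 3

# Synthetic Gaussian "target" p = N(mu_true, Sigma_true).
mu_true = np.array([1.0, -2.0, 0.5])
L = rng.normal(size=(d, d))
Sigma_true = L @ L.T + d * np.eye(d)
P_true = np.linalg.inv(Sigma_true)        # precision matrix

def target_score(x):
    """grad_x log p(x) for the Gaussian target (rows of x are points)."""
    return -(x - mu_true) @ P_true

# Batch of evaluation points and their target scores.
X = rng.normal(size=(64, d))
S = target_score(X)

# Fit S ≈ X @ A.T + b by least squares on an augmented design matrix.
X_aug = np.hstack([X, np.ones((64, 1))])
coef, *_ = np.linalg.lstsq(X_aug, S, rcond=None)
A = coef[:d].T                            # affine slope
b = coef[d]                               # affine intercept

Sigma_hat = -np.linalg.inv(A)             # Sigma = -A^{-1}
mu_hat = Sigma_hat @ b                    # mu = Sigma @ b

print(np.allclose(mu_hat, mu_true, atol=1e-6))       # True
print(np.allclose(Sigma_hat, Sigma_true, atol=1e-4)) # True
```

Because the fit is a linear solve rather than stochastic gradient descent, no learning rate is needed, which gestures at why closed-form score-based updates can require far fewer gradient evaluations than standard BBVI.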
Diana Cai is a research fellow in the Center for Computational Mathematics at the Flatiron Institute. Her research spans the areas of machine learning and statistics, and focuses on developing probabilistic tools motivated by applications in the natural sciences. Previously, Diana obtained a Ph.D. in computer science at Princeton University, where she was advised by Ryan Adams and Barbara Engelhardt. Her work has been recognized by a Google Ph.D. Fellowship in Machine Learning, Rising Stars in EECS, a Rising Stars in Machine Learning Award from the University of Maryland, a School of Engineering and Applied Science Award for Excellence from Princeton University, and spotlight paper awards at the NeurIPS and ICML conferences.